An Evaluation on Large Language Model Outputs: Discourse and Memorization

de Wynter, Adrian, Wang, Xun, Sokolov, Alex, Gu, Qilong, Chen, Si-Qing

arXiv.org Artificial Intelligence

We present an empirical evaluation of various outputs generated by nine of the most widely-available large language models (LLMs). Our analysis is done with off-the-shelf, readily-available tools. We find a correlation between percentage of memorized text, percentage of unique text, and overall output quality, when measured with respect to output pathologies such as counterfactual and logically-flawed statements, and general failures like not staying on topic. Overall, 80.0% of the outputs evaluated contained memorized data, but outputs containing the most memorized content were also more likely to be considered of high quality. We discuss and evaluate mitigation strategies, showing that, in the models evaluated, the rate of memorized text being output is reduced. We conclude with a discussion on potential implications around what it means to learn, to memorize, and to evaluate quality text.
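The "percentage of memorized text" metric referenced in the abstract can be approximated with a simple word n-gram overlap check against a reference corpus. The sketch below is only illustrative: the function name, the 8-word window, and the toy corpus are our assumptions, not the paper's actual off-the-shelf tooling.

```python
def memorized_fraction(output: str, corpus: str, n: int = 8) -> float:
    """Fraction of the output's word n-grams that appear verbatim in the corpus.

    A crude proxy for 'percentage of memorized text'; n=8 is an
    arbitrary window size, not a setting taken from the paper.
    """
    words = output.split()
    if len(words) < n:
        return 0.0
    # Slide a window of n words over the output and test each span
    # for a verbatim match inside the reference corpus.
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    hits = sum(1 for g in ngrams if g in corpus)
    return hits / len(ngrams)

# Toy example: the first eight words of the output are copied verbatim.
corpus = "the quick brown fox jumps over the lazy dog near the river bank"
output = "the quick brown fox jumps over the lazy dog while humming a tune"
print(round(memorized_fraction(output, corpus), 2))  # → 0.33
```

Here 2 of the output's 6 eight-word spans occur verbatim in the corpus, so roughly a third of the text counts as "memorized" under this crude definition; real measurements would use token-level matching against the actual training data.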


OpenAI GPT leaking your data

#artificialintelligence

In this series on the GPT language model, we focus on the paper "Extracting Training Data from Large Language Models." The authors show that they can extract verbatim data from a language model such as GPT-2. More interestingly, they explain that they can extract verbatim text that appeared only a few times in the model's training data. Naturally, this can be very dangerous if you run a company and use customers' data to train a language model. In their own words, "the paper demonstrates that (…), an adversary can perform a training data extraction attack to recover individual training examples by querying the language model." Who would want to risk leaking private information?
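The core of such an attack is a verbatim-leak check: sample many outputs from the model, then flag any sample that reproduces a long span from the private training data. The sketch below illustrates only that filtering step with hand-written strings; the function name, the 6-word threshold, and the toy data are our assumptions, and the actual paper additionally uses perplexity-based membership inference to rank candidates.

```python
def contains_verbatim_span(sample: str, training_docs: list[str],
                           min_words: int = 6) -> bool:
    """True if `sample` shares a verbatim run of `min_words` words with any doc.

    min_words=6 is an arbitrary threshold chosen for this illustration.
    """
    words = sample.split()
    for i in range(len(words) - min_words + 1):
        span = " ".join(words[i:i + min_words])
        if any(span in doc for doc in training_docs):
            return True
    return False

# Hypothetical private training document and two model samples:
# the first leaks a verbatim span, the second is benign.
private_docs = ["customer jane doe card number 4111 1111 1111 1111 expires soon"]
samples = [
    "jane doe card number 4111 1111 1111 1111 appears in logs",
    "the weather today is pleasant and mild",
]
leaks = [s for s in samples if contains_verbatim_span(s, private_docs)]
print(len(leaks))  # → 1
```

In a real attack the `samples` list would come from repeatedly querying the deployed model, which is exactly why training on raw customer data is risky.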